Optical coherence tomography (OCT) captures cross-sectional data and is used for the screening, monitoring, and treatment planning of retinal diseases. Technological developments to increase the speed of acquisition often result in systems with a narrower spectral bandwidth, and hence a lower axial resolution. Traditionally, image-processing-based techniques have been used to reconstruct subsampled OCT data; more recently, deep-learning-based methods have been explored. In this study, we simulate reduced axial scan (A-scan) resolution by Gaussian windowing in the spectral domain and investigate the use of a learning-based approach for image feature reconstruction. In anticipation of the lower resolution that accompanies wide-field OCT systems, we build upon super-resolution techniques, reconstructing lost features using a pixel-to-pixel approach with a modified super-resolution generative adversarial network (SRGAN) architecture, to better aid clinicians in their decision-making and thereby improve patient outcomes.
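To make the degradation model concrete, here is a minimal sketch of simulating reduced axial resolution by Gaussian windowing in the spectral domain, in the spirit of the abstract; the function name, window parameterization, and synthetic two-reflector example are our own illustrative assumptions, not the paper's code.

```python
import numpy as np

def simulate_reduced_axial_resolution(spectrum: np.ndarray,
                                      fwhm_fraction: float = 0.5) -> np.ndarray:
    """Apply a Gaussian window to an OCT spectral interferogram and return
    the magnitude A-scan. A narrower window (smaller fwhm_fraction) mimics
    a source with narrower spectral bandwidth, i.e. lower axial resolution."""
    n = spectrum.shape[-1]
    k = np.arange(n) - n / 2  # window centered on the middle of the band
    # Convert the desired FWHM (as a fraction of the full spectrum) to sigma.
    sigma = (fwhm_fraction * n) / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    window = np.exp(-0.5 * (k / sigma) ** 2)
    # The A-scan is the magnitude of the inverse Fourier transform
    # of the (windowed) spectral data.
    return np.abs(np.fft.ifft(spectrum * window, axis=-1))

# Illustrative example: two nearby reflectors blur together as the window narrows.
depth = np.zeros(1024)
depth[100], depth[110] = 1.0, 0.8
spectrum = np.fft.fft(depth)
full = simulate_reduced_axial_resolution(spectrum, fwhm_fraction=1.0)
low = simulate_reduced_axial_resolution(spectrum, fwhm_fraction=0.25)
```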
Optical coherence tomography (OCT) is a non-invasive technique that captures cross-sectional regions of the retina at micrometer resolution. It has been widely used as an auxiliary imaging reference to detect eye-related pathologies and to predict the longitudinal progression of disease characteristics. Retinal layer segmentation is one of the crucial feature extraction techniques, as changes in retinal layer thickness and deformation of the retinal layers due to the presence of fluid are highly correlated with several prevalent eye diseases, such as diabetic retinopathy and age-related macular degeneration (AMD). However, these images are acquired from different devices with different intensity distributions; in other words, they belong to different imaging domains. This paper proposes a segmentation-guided domain-adaptation method that adapts images from multiple devices into a single image domain for which a state-of-the-art pre-trained model is available. It avoids the time-consuming manual labelling of upcoming new datasets and the retraining of existing networks. The semantic consistency and global feature consistency of the network minimize the hallucination effects that many researchers have reported for cycle-consistent generative adversarial network (CycleGAN) architectures.
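One way to read "segmentation-guided" is as an extra consistency term on top of the usual CycleGAN objective; the sketch below shows that idea with hypothetical module names (`G_ab`, `G_ba`, `D_b`, `seg_model` are all stand-ins, and the loss weights are illustrative, not the paper's).

```python
import torch
import torch.nn.functional as F

def generator_loss(G_ab, G_ba, D_b, seg_model, x_a,
                   lambda_cyc=10.0, lambda_seg=1.0):
    """Sketch of a segmentation-guided CycleGAN generator objective.
    The extra term asks a frozen layer-segmentation model to produce the
    same labels before and after translation, discouraging the hallucinated
    anatomy reported for plain cycle-consistency training."""
    fake_b = G_ab(x_a)          # translate domain A -> B
    rec_a = G_ba(fake_b)        # cycle back B -> A
    d_out = D_b(fake_b)
    adv = F.mse_loss(d_out, torch.ones_like(d_out))  # LSGAN adversarial term
    cyc = F.l1_loss(rec_a, x_a)                      # cycle-consistency term
    with torch.no_grad():
        target = seg_model(x_a).argmax(1)            # pseudo-labels from source
    seg = F.cross_entropy(seg_model(fake_b), target) # semantic consistency term
    return adv + lambda_cyc * cyc + lambda_seg * seg
```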
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs, which have many computational and memory constraints. In this Mobile AI challenge, we address this problem and ask the participants to design an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating rates of up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
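For context, here is a minimal post-training INT8 quantization sketch using TensorFlow Lite, one common route to fully integer models of the kind the challenge required; the toy network and calibration data are placeholders (the actual submissions and the challenge's tooling may differ).

```python
import numpy as np
import tensorflow as tf

# Stand-in for a participant's Keras super-resolution network (assumption).
sr_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu",
                           input_shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(3 * 9, 3, padding="same"),
    tf.keras.layers.Lambda(lambda x: tf.nn.depth_to_space(x, 3)),  # 3X upscale
])

def representative_dataset():
    # Calibration samples for activation ranges; DIV2K crops in practice.
    for _ in range(100):
        yield [np.random.rand(1, 64, 64, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(sr_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8   # fully integer I/O for the edge NPU
converter.inference_output_type = tf.int8
tflite_int8 = converter.convert()
```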
In real-world scenarios, subgraphs of a larger global graph may be distributed across multiple devices or institutions and, due to privacy restrictions, be accessible only locally, even though links may exist between them. Recently proposed subgraph federated learning (FL) methods deal with those missing links across private local subgraphs while training graph neural networks (GNNs) in a distributed manner. However, they overlook the inevitable heterogeneity among subgraphs, which arises from subgraphs containing different parts of the global graph. For example, a subgraph may belong to one of the communities within the larger global graph. In such a case, naive subgraph FL will collapse incompatible knowledge from local GNN models trained on heterogeneous graph distributions. To overcome this limitation, we introduce a new subgraph FL problem, personalized subgraph FL, which focuses on the joint improvement of the interrelated local GNN models rather than on learning a single global GNN model, and we propose a novel framework, FEDerated Personalized sUBgraph learning (FED-PUB), to tackle it. A crucial challenge in personalized subgraph FL is that the server does not know which subgraph each client has. FED-PUB therefore uses random graphs as inputs to compute similarities between clients, and uses these similarities to perform weighted averaging for server-side aggregation. Moreover, it learns a personalized sparse mask at each client to select and update only the subgraph-relevant subset of the aggregated parameters. We validate FED-PUB on six datasets, considering both non-overlapping and overlapping subgraphs, and it substantially outperforms relevant baselines.
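A minimal sketch of the similarity-weighted server-side aggregation, as we read it from the abstract; details such as the cosine measure, softmax temperature, and `client_embeddings` (each client's model output on a shared random graph, flattened to a vector) are our assumptions rather than FED-PUB's exact procedure.

```python
import torch
import torch.nn.functional as F

def server_aggregate(client_params, client_embeddings, tau=0.5):
    """client_params: list of state dicts (tensor-valued); client_embeddings:
    list of 1-D tensors summarizing each client's behavior on random-graph
    inputs. Returns one personalized parameter aggregate per client."""
    n = len(client_params)
    sims = torch.zeros(n, n)
    for i in range(n):
        for j in range(n):
            sims[i, j] = F.cosine_similarity(
                client_embeddings[i], client_embeddings[j], dim=0)
    weights = torch.softmax(sims / tau, dim=1)  # row i: mixing weights for client i
    personalized = []
    for i in range(n):
        agg = {k: sum(weights[i, j] * client_params[j][k] for j in range(n))
               for k in client_params[0]}
        personalized.append(agg)  # each client receives its own weighted average
    return personalized
```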
Large-scale protein language models (PLMs) have improved performance on protein prediction tasks ranging from 3D structure prediction to various function predictions. In particular, AlphaFold, a groundbreaking AI system, may reshape structural biology. However, the utility of AlphaFold's PLM module, Evoformer, beyond structure prediction has not yet been explored. In this paper, we investigate the representation power of three popular PLMs: ESM-1b (single sequence), MSA Transformer (multiple sequence alignment), and Evoformer (structure), with a special focus on Evoformer. Specifically, we aim to answer the following key questions: (i) Does Evoformer, trained as part of AlphaFold, produce representations amenable to predicting protein function? (ii) If so, can it replace ESM-1b and MSA Transformer? (iii) How much do these PLMs rely on evolution-related protein data, and in that regard, do they complement each other? We compare these models through empirical studies and offer new insights and conclusions. Finally, we release code and datasets for reproducibility.
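Comparisons of this kind are often run by training a light probe on frozen embeddings; the sketch below shows that generic setup (the probe architecture, pooling choice, and dummy inputs are hypothetical, and extraction of the actual ESM-1b / MSA Transformer / Evoformer embeddings is not shown).

```python
import torch
import torch.nn as nn

class FunctionProbe(nn.Module):
    """Minimal probing head over frozen per-residue PLM embeddings:
    mean-pool across the sequence, then classify with a linear layer."""
    def __init__(self, embed_dim: int, num_classes: int):
        super().__init__()
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, residue_embeddings: torch.Tensor) -> torch.Tensor:
        # residue_embeddings: (batch, seq_len, embed_dim), from a frozen PLM
        pooled = residue_embeddings.mean(dim=1)
        return self.head(pooled)

probe = FunctionProbe(embed_dim=1280, num_classes=10)  # 1280 = ESM-1b width
logits = probe(torch.randn(4, 300, 1280))              # dummy embeddings
```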
Modeling and control of complex physical dynamics are essential in real-world problems. We propose a novel framework that is generally applicable to solving PDE-constrained optimal control problems by introducing surrogate models of PDE solution operators with special correctors. The procedure of the proposed framework is divided into two phases: solution operator learning for the PDE constraints (Phase 1) and search for the optimal control (Phase 2). Once the surrogate model is trained in Phase 1, the optimal control can be inferred in Phase 2 without intensive computation. Our framework can be applied to both data-driven and data-free cases. We demonstrate the successful application of our method to various optimal control problems with different control variables and different PDE constraints, from the Poisson equation to the Burgers' equation.
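The reason Phase 2 is cheap is that, once the surrogate is trained, the control is the only trainable variable and can be optimized by gradient descent through the frozen surrogate. A minimal sketch of that phase, with all names (`surrogate`, `objective`, `u_init`) assumed for illustration:

```python
import torch

def search_optimal_control(surrogate, objective, u_init, steps=500, lr=1e-2):
    """Phase 2 sketch: the surrogate's parameters are not updated; only the
    control u is, using gradients that flow through the surrogate."""
    u = u_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([u], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        state = surrogate(u)        # predicted PDE solution under control u
        loss = objective(state, u)  # e.g. tracking error plus control cost
        loss.backward()
        opt.step()
    return u.detach()

# Example objective: drive the state toward a target while penalizing effort.
def make_objective(target, alpha=1e-3):
    return lambda s, u: ((s - target) ** 2).mean() + alpha * (u ** 2).mean()
```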
Active inference can be defined as Bayesian modeling of a brain with a biologically plausible model of the agent. Its main idea relies on the free energy principle and the agent's prior preferences: an agent will choose an action that leads its future observations toward its prior preferences. In this paper, we claim that active inference can be interpreted through reinforcement learning (RL) algorithms, and we establish a theoretical connection between them. We extend the concept of expected free energy (EFE), a core quantity in active inference, and claim that EFE can be treated as a negative value function. Motivated by the concept of prior preferences and this theoretical connection, we propose a simple but novel method for learning a prior preference from experts, showing that the inverse RL problem can be approached from the new perspective of active inference. Experimental results on prior preference learning demonstrate the feasibility of the EFE-based reward and its application to the inverse RL problem.
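To make the claimed connection concrete, here is a schematic rendering in standard active-inference notation (our transcription, not the paper's exact formulation):

```latex
% Expected free energy of a policy \pi, summed over future times \tau:
G(\pi) = \sum_{\tau} \mathbb{E}_{Q(o_\tau, s_\tau \mid \pi)}
         \big[ \ln Q(s_\tau \mid \pi) - \ln \tilde{P}(o_\tau, s_\tau) \big],
% where \tilde{P} encodes the agent's prior preferences over outcomes.
% The paper's claim can then be read schematically as
V(\pi) \approx -G(\pi),
% so that selecting policies by minimizing EFE mirrors value maximization in RL.
```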
We introduce a new tool for stochastic convex optimization (SCO): a Reweighted Stochastic Query (ReSQue) estimator for the gradient of a function convolved with a (Gaussian) probability density. Combining ReSQue with recent advances in ball oracle acceleration [CJJJLST20, ACJJS21], we develop algorithms achieving state-of-the-art complexities for SCO in parallel and private settings. For an SCO objective constrained to the unit ball in $\mathbb{R}^d$, we obtain the following results (up to polylogarithmic factors). We give a parallel algorithm obtaining optimization error $\epsilon_{\text{opt}}$ with $d^{1/3}\epsilon_{\text{opt}}^{-2/3}$ gradient oracle query depth and $d^{1/3}\epsilon_{\text{opt}}^{-2/3} + \epsilon_{\text{opt}}^{-2}$ gradient queries in total, assuming access to a bounded-variance stochastic gradient estimator. For $\epsilon_{\text{opt}} \in [d^{-1}, d^{-1/4}]$, our algorithm matches the state-of-the-art oracle depth of [BJLLS19] while maintaining the optimal total work of stochastic gradient descent. We give an $(\epsilon_{\text{dp}}, \delta)$-differentially private algorithm which, given $n$ samples of Lipschitz loss functions, obtains near-optimal optimization error and makes $\min(n, n^2\epsilon_{\text{dp}}^2 d^{-1}) + \min(n^{4/3}\epsilon_{\text{dp}}^{1/3}, (nd)^{2/3}\epsilon_{\text{dp}}^{-1})$ queries to the gradients of these functions. In the regime $d \le n \epsilon_{\text{dp}}^{2}$, where privacy comes at no cost in terms of the optimal loss up to constants, our algorithm uses $n + (nd)^{2/3}\epsilon_{\text{dp}}^{-1}$ queries and improves upon recent advancements of [KLL21, AFKT21]. In the moderately low-dimensional setting $d \le \sqrt n \epsilon_{\text{dp}}^{3/2}$, our query complexity is near-linear.
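The following sketch illustrates one way a reweighted estimator of this kind can work, under our reading of the abstract (this is not the paper's pseudocode): samples drawn around a fixed reference point are shared across nearby query points by multiplying each stochastic gradient with the Gaussian density ratio.

```python
import numpy as np

def resque_gradient(grad_f, x, x_ref, rho, num_samples=1000, rng=None):
    """Estimate the gradient at x of f convolved with N(0, rho^2 I), reusing
    samples y ~ N(x_ref, rho^2 I) and reweighting by the density ratio
    p_{N(x, rho^2 I)}(y) / p_{N(x_ref, rho^2 I)}(y)."""
    rng = rng or np.random.default_rng(0)
    d = x.shape[0]
    ys = x_ref + rho * rng.standard_normal((num_samples, d))
    log_w = (np.sum((ys - x_ref) ** 2, axis=1)
             - np.sum((ys - x) ** 2, axis=1)) / (2 * rho ** 2)
    w = np.exp(log_w)                           # importance weights
    grads = np.stack([grad_f(y) for y in ys])   # stochastic gradients at y
    return (w[:, None] * grads).mean(axis=0)    # estimate of grad f_rho(x)

# Sanity check on f(x) = ||x||^2 / 2, whose smoothed gradient is still x.
g = resque_gradient(lambda y: y, np.ones(5), 0.9 * np.ones(5), rho=0.3)
```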
Cashews are grown by over 3 million smallholders in more than 40 countries worldwide as a principal source of income. As the third largest cashew producer in Africa, Benin has nearly 200,000 smallholder cashew growers contributing 15% of the country's national export earnings. However, a lack of information on where and how cashew trees grow across the country hinders decision-making that could support increased cashew production and poverty alleviation. By leveraging 2.4-m Planet Basemaps and 0.5-m aerial imagery, newly developed deep learning algorithms, and large-scale ground truth datasets, we successfully produced the first national map of cashew in Benin and characterized the expansion of cashew plantations between 2015 and 2021. In particular, we developed a SpatioTemporal Classification with Attention (STCA) model to map the distribution of cashew plantations, which can fully capture texture information from discriminative time steps during a growing season. We further developed a Clustering Augmented Self-supervised Temporal Classification (CASTC) model to distinguish high-density from low-density cashew plantations via automatic feature extraction and optimized clustering. Results show that the STCA model achieved an overall accuracy of 80% and the CASTC model an overall accuracy of 77.9%. We found that the cashew area in Benin doubled from 2015 to 2021, with 60% of new plantation development coming from cropland or fallow land, while encroachment of cashew plantations into protected areas increased by 70%. Only half of cashew plantations were high-density in 2021, suggesting high potential for intensification. Our study illustrates the power of combining high-resolution remote sensing imagery and state-of-the-art deep learning algorithms to better understand tree crops in the heterogeneous smallholder landscape.
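The core of "attention over discriminative time steps" can be sketched as attention pooling over a sequence of per-date image features; the architecture below is a simplified stand-in, not STCA itself, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class TemporalAttentionClassifier(nn.Module):
    """Score per-time-step features, pool with learned attention weights,
    and classify; the attention emphasizes the most discriminative
    acquisitions within a growing season."""
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, time_steps, feat_dim), e.g. CNN features per image date
        attn = torch.softmax(self.score(feats), dim=1)  # (batch, T, 1)
        pooled = (attn * feats).sum(dim=1)              # attention-weighted sum
        return self.head(pooled)

model = TemporalAttentionClassifier(feat_dim=128, num_classes=2)  # cashew vs. other
logits = model(torch.randn(8, 12, 128))  # 12 time steps across a season
```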
There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g., the presence of cells or blood vessels) and global properties of an image (e.g., which brain region the image comes from) is a crucial and open challenge. However, most existing datasets and benchmarks for neuroanatomy consider only a single downstream task at a time. To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to read out information about brain structure and architecture from the same image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions. We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures. Our experiments not only highlight the rich heterogeneity of this dataset, but also provide insights into how self-supervised approaches can be used to learn representations that capture multiple attributes of a single image and perform well on a variety of downstream tasks. Datasets, code, and pre-trained baseline models are provided at: https://mtneuro.github.io/ .
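Self-supervised baselines on benchmarks like this are commonly evaluated with a frozen encoder and a linear readout; the sketch below shows that generic protocol for brain-region prediction (the encoder, class count, and data are placeholders, and MTNeuro's exact evaluation setup may differ).

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(          # stand-in for a pretrained SSL encoder
    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
for p in encoder.parameters():
    p.requires_grad = False       # the representation stays frozen

readout = nn.Linear(32, 4)        # e.g. 4 region classes (assumed)
opt = torch.optim.Adam(readout.parameters(), lr=1e-3)

patches = torch.randn(16, 1, 64, 64)   # dummy X-ray microtomography patches
labels = torch.randint(0, 4, (16,))
with torch.no_grad():
    feats = encoder(patches)            # features from the frozen encoder
loss = nn.functional.cross_entropy(readout(feats), labels)
loss.backward()
opt.step()                              # only the linear readout is trained
```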